A PTAS for Agnostically Learning Halfspaces
Author
Abstract
We present a PTAS for agnostically learning halfspaces w.r.t. the uniform distribution on the d-dimensional sphere. Namely, we show that for every μ > 0 there is an algorithm that runs in time poly(d, 1/ε), and is guaranteed to return a classifier with error at most (1 + μ)·opt + ε, where opt is the error of the best halfspace classifier. This improves on Awasthi, Balcan and Long [2], who showed an algorithm with an (unspecified) constant approximation ratio. Our algorithm combines the classical technique of polynomial regression (e.g. [22, 16]) with the new localization technique of [2].

∗ Department of Mathematics, Hebrew University, Jerusalem 91904, Israel. [email protected]
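As a rough illustration of the polynomial-regression ingredient (in the spirit of [22, 16]), the following Python sketch fits a low-degree polynomial to ±1 labels and classifies with its sign. This is not the paper's algorithm: the degree, the least-squares surrogate for L1 regression, and the toy data are assumptions for illustration only, and the localization step of [2] is omitted.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    # All monomials of the coordinates of X up to the given total degree.
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_poly_threshold(X, y, degree=3):
    # Fit a low-degree polynomial to the +/-1 labels and classify with its sign.
    Phi = poly_features(X, degree)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares stand-in for the L1 regression of [22, 16]
    return lambda X_new: np.sign(poly_features(X_new, degree) @ w)

# Toy usage: points uniform on the sphere, labels from a halfspace with 5% label noise.
rng = np.random.default_rng(0)
d, n = 5, 2000
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # project onto the unit sphere
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)
y[rng.random(n) < 0.05] *= -1                      # flip 5% of labels (agnostic noise)
h = fit_poly_threshold(X, y, degree=3)
print("empirical error:", np.mean(h(X) != y))
```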
Similar Resources
Embedding Hard Learning Problems Into Gaussian Space
We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and show that any algorithm for agnostically learning halfs...
Agnostically Learning Halfspaces with Margin Errors
We describe and analyze a new algorithm for agnostically learning halfspaces with respect to the margin error rate. Roughly speaking, this corresponds to the worst-case error rate after each point is perturbed by a noise vector of length at most μ. Margin based analysis is widely used in learning theory and is considered the most successful theoretical explanation for the sta...
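One way to formalize the margin error rate described in this snippet (our notation, assuming a unit-norm weight vector w and counting points on the decision boundary as errors; the paper's exact definition may differ):

\[
\mathrm{err}_\mu(w) \;=\; \Pr_{(x,y)}\Bigl[\ \exists\, v,\ \|v\|\le \mu:\ \operatorname{sign}\bigl(\langle w,\, x+v\rangle\bigr)\neq y\ \Bigr] \;=\; \Pr_{(x,y)}\bigl[\, y\,\langle w, x\rangle \le \mu \,\bigr].
\]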
Potential-Based Agnostic Boosting
We prove strong noise-tolerance properties of a potential-based boosting algorithm, similar to MadaBoost (Domingo and Watanabe, 2000) and SmoothBoost (Servedio, 2003). Our analysis is in the agnostic framework of Kearns, Schapire and Sellie (1994), giving polynomial-time guarantees in the presence of arbitrary noise. A remarkable feature of our algorithm is that it can be implemented without reweig...
Complexity Theoretic Limitations on Learning DNF's
Using the recently developed framework of [14], we show that under a natural assumption on the complexity of refuting random K-SAT formulas, learning DNF formulas is hard. Furthermore, the same assumption implies the hardness of learning intersections of ω(log(n)) halfspaces, agnostically learning conjunctions, as well as virtually all (distribution free) learning problems that were previously ...
Learning Halfspaces Under Log-Concave Densities: Polynomial Approximations and Moment Matching
We give the first polynomial-time algorithm for agnostically learning any function of a constant number of halfspaces with respect to any log-concave distribution (for any constant accuracy parameter). This result was not known even for the case of PAC learning the intersection of two halfspaces. We give two very different proofs of this result. The first develops a theory of polynomial approxi...
Journal:
Volume / Issue:
Pages: -
Publication date: 2015